Self-Aware Resource Management in Embedded Systems
Resource management for modern embedded systems is challenging in the presence of dynamic workloads, limited energy and power budgets, and application and user requirements. These diverse and dynamic requirements often result in conflicting objectives that need to be handled by intelligent and self-aware resource management. State-of-the-art resource management approaches leverage offline and online machine learning techniques for handling such complexity. However, these approaches focus on fixed objectives, limiting their adaptability to dynamically evolving requirements at run-time.
In this dissertation, we first propose resource management approaches with fixed objectives for handling concurrent dynamic workload scenarios, mixed-sensitivity workloads, and user requirements and battery constraints. Then, we propose a comprehensive self-aware resource management approach for handling multiple dynamic objectives at run-time. The resource management approaches proposed in this dissertation use machine learning techniques for offline modeling and online control. In each approach, we consider a dynamic set of requirements that has not been considered in state-of-the-art approaches, and we improve the self-awareness of resource management by learning application characteristics, users' habits, and battery patterns. We characterize applications through offline data collection to handle the conflicting requirements of multiple concurrent applications. Further, we consider users' activities and battery patterns for user- and battery-aware resource management. Finally, we propose a comprehensive resource management approach that considers dynamic variation in embedded systems and formulates a resource management goal accordingly.
The approaches presented in this dissertation focus on dynamic variation in embedded systems and on responding to that variation efficiently. They aim to minimize energy consumption, satisfy the applications' performance requirements, respect power constraints, satisfy user requirements, and maximize battery cycle life. Each resource management approach is evaluated and compared against relevant state-of-the-art resource management frameworks.
Concurrent Application Bias Scheduling for Energy Efficiency of Heterogeneous Multi-Core Platforms
Minimizing the energy consumption of concurrent applications on heterogeneous multi-core platforms is challenging given the diversity in the energy-performance profiles of both the applications and the hardware. Adaptive learning techniques have made exhaustive exploration of the Pareto-optimal space practically feasible for identifying an energy-efficient configuration. Existing approaches consider a single application's characteristics when optimizing energy consumption. However, a configuration that is optimal for a given application may no longer be optimal when a new application arrives. Although some related works do consider concurrent-application scenarios, they overlook each application's weight in the total energy consumption, which prevents them from prioritizing among applications. We address this limitation by considering the mutual effect of concurrent applications on system-wide energy consumption to adapt the resource configuration at run-time. We characterize each application's power-performance profile as a weighted bias through offline profiling. We combine this model with an online predictive strategy to make resource allocation decisions that minimize energy consumption while honoring performance requirements. The proposed strategy is implemented as a user-space process and evaluated on a heterogeneous Odroid XU3 platform over the Rodinia benchmark suite. Experimental results show up to 61% energy saving compared to the standard baseline of Linux governors and up to 27% energy gain compared to state-of-the-art adaptive learning-based resource management techniques.
Proceedings of the 2019 Design, Automation & Test in Europe (DATE)
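The weighted-bias selection described above can be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: the application names, bias weights, configuration names, and predicted energy/performance numbers are all hypothetical placeholder data standing in for offline profiles.

```python
# Hypothetical offline profile: each app's bias weight, i.e. its assumed
# share of system energy under a reference configuration (made-up data).
BIAS = {"kmeans": 0.6, "bfs": 0.4}

# Hypothetical predicted (energy in J, normalized performance) per
# configuration and application, as an online predictor might supply.
PREDICT = {
    "big_2GHz":      {"kmeans": (9.0, 1.0), "bfs": (5.0, 1.0)},
    "big_1.4GHz":    {"kmeans": (6.0, 0.8), "bfs": (3.5, 0.85)},
    "little_1.4GHz": {"kmeans": (4.0, 0.5), "bfs": (2.0, 0.6)},
}

def pick_config(apps, perf_req):
    """Pick the configuration minimizing bias-weighted system energy
    while every app still meets its normalized performance requirement."""
    best, best_energy = None, float("inf")
    for cfg, table in PREDICT.items():
        if any(table[a][1] < perf_req[a] for a in apps):
            continue  # skip configurations that violate a requirement
        energy = sum(BIAS[a] * table[a][0] for a in apps)
        if energy < best_energy:
            best, best_energy = cfg, energy
    return best

print(pick_config(["kmeans", "bfs"], {"kmeans": 0.75, "bfs": 0.8}))
# -> big_1.4GHz: cheapest weighted energy that still meets both requirements
```

Weighting each application's energy by its bias is what lets the selection prioritize among concurrent applications rather than treating total energy as undifferentiated.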
Run-time resource allocation on heterogeneous multi-core systems is challenging under varying workloads and limited power and energy budgets. User interaction with these systems changes the performance requirements, which often conflict with concurrent applications' objectives and system constraints. Current resource allocation approaches focus on optimizing a fixed objective, ignoring the variation in system and application objectives at run-time. For efficient resource allocation, the system has to operate autonomously by formulating a hierarchy of goals. We present goal-driven autonomy (GDA) for on-chip resource allocation decisions, which allows systems to generate and prioritize goals in response to dynamic workload and system variation. We implemented a proof-of-concept resource management framework that integrates the proposed goal management control to meet power, performance, and user requirements simultaneously. Experimental results on an Exynos platform containing ARM's big.LITTLE-based heterogeneous multi-processor (HMP) show the effectiveness of GDA in efficient resource allocation in comparison with existing fixed-objective policies.
2020 33rd International Conference on VLSI Design and 2020 19th International Conference on Embedded Systems (VLSID)
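The goal generation and prioritization idea behind GDA can be sketched as a small event-driven goal queue. This is a minimal illustration under assumed names and priority rules; the events, goals, and priorities below are hypothetical, not the paper's actual design.

```python
import heapq

class GoalManager:
    """Keep a priority queue of goals; run-time events generate new
    goals, and the most urgent one drives resource allocation."""

    def __init__(self):
        self._goals = []  # heap of (priority, goal); lower = more urgent

    def add(self, priority, goal):
        heapq.heappush(self._goals, (priority, goal))

    def on_event(self, event):
        # Hypothetical mapping from run-time events to generated goals.
        if event == "power_cap_exceeded":
            self.add(0, "respect_power_budget")   # most urgent
        elif event == "user_interaction":
            self.add(1, "meet_user_performance")
        elif event == "battery_low":
            self.add(2, "minimize_energy")

    def current_goal(self):
        # Default goal when nothing more urgent has been generated.
        return self._goals[0][1] if self._goals else "minimize_energy"

mgr = GoalManager()
mgr.on_event("user_interaction")
mgr.on_event("power_cap_exceeded")
print(mgr.current_goal())  # -> respect_power_budget
```

The point of the sketch is the re-prioritization: a fixed-objective policy would keep optimizing one goal, whereas here the active goal changes as events arrive.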
Modern battery-powered Embedded Systems (ES) must provide high performance with minimal energy consumption to enhance the user experience. However, these two are often conflicting objectives. Current ES resource management techniques consider user behavior and preferences only indirectly, or not at all. In this paper, we present a novel user- and battery-aware resource management framework for multi-processor architectures that considers these conflicting requirements and dynamic unknown workloads at run-time to maximize user satisfaction. The proposed technique learns the user's habits to dynamically adjust the resource management schemes based on the data it collects on the user's plug-in behavior, battery charge status, and workload variability at run-time. This information is used to improve the balance between performance and energy consumption, and thus optimize the Quality of Experience (QoE). Our evaluation results show that our framework enhances the user experience by 22% in comparison with the existing state-of-the-art.